    Multiple Specification of an Object’s Size for Picking it up.

    Subjects were asked to pick up disks. The apparent size of the disks was manipulated by a visual illusion. Of the two aspects of the action that depend on the object's size, only one was affected by the illusion. We conclude that different aspects of an action are controlled independently, rather than being co-ordinated on the basis of a single perceptual variable.

    A new view on grasping

    Reaching out for an object is often described as consisting of two components that are based on different visual information. Information about the object's position and orientation guides the hand to the object, while information about the object's shape and size determines how the fingers move relative to the thumb to grasp it. We propose an alternative description, which consists of determining suitable positions on the object (on the basis of its shape, surface roughness, and so on) and then moving one's thumb and fingers more or less independently to these positions. We modelled this description using a minimum jerk approach, whereby the finger and thumb approach their respective target positions approximately orthogonally to the surface. Our model predicts how experimental variables such as object size, movement speed, fragility, and required accuracy will influence the timing and size of the maximum aperture of the hand. An extensive review of experimental studies on grasping showed that the predicted influences correspond to human behaviour.
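
    The following sketch illustrates this kind of model (with made-up start positions, contact points, and parameter values; the `approach` parameter setting how strongly each digit decelerates along the surface normal is the model's free parameter). Each digit follows a fifth-order minimum-jerk polynomial, and the grip aperture emerges from the two independent digit paths.

```python
import numpy as np

def min_jerk_digit(start, contact, normal, approach, n=100):
    """Minimum-jerk path for one digit (a sketch, not the paper's code).

    Fifth-order polynomial with zero initial velocity and acceleration,
    zero final velocity, and a final deceleration of size `approach`
    directed along the outward surface normal at the contact point, so
    that the digit arrives roughly orthogonally to the surface.
    """
    tau = np.linspace(0.0, 1.0, n)[:, None]            # normalised time
    d = np.asarray(contact, float) - np.asarray(start, float)
    a_f = approach * np.asarray(normal, float)         # final deceleration
    # x(tau) = x0 + d*(10 t^3 - 15 t^4 + 6 t^5) + (a_f/2) t^3 (1 - t)^2
    return (np.asarray(start, float)
            + d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
            + 0.5 * a_f * tau**3 * (1 - tau)**2)

# Thumb and finger move independently to opposite sides of a 6 cm disk;
# grip aperture is simply the distance between the two digit paths.
thumb  = min_jerk_digit([0.0, 0.00], [0.30, -0.03], [0, -1], approach=1.0)
finger = min_jerk_digit([0.0, 0.05], [0.30,  0.03], [0,  1], approach=1.0)
aperture = np.linalg.norm(finger - thumb, axis=1)
print("maximum aperture %.3f m at %d%% of the movement"
      % (aperture.max(), 100 * aperture.argmax() / (len(aperture) - 1)))
```

    In this sketch a larger `approach` value yields a larger maximum aperture, which is the form that the model's predictions about object size, speed, fragility, and required accuracy take.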

    Visuomotor delays when hitting running spiders.

    In general, information about the environment (for instance about a target) is not instantaneously available to the nervous system. The minimal delay for visual information to affect the movement of the hand is about 110 ms. However, if the movement of a target is predictable, humans can pursue it with zero delay. Making this prediction requires information about the speed of the target. Our results show that this information is used with a delay of about 200 ms. We argue that oculomotor efference is a likely source of information for this prediction.
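
    These numbers suggest a simple scheme (a sketch of the idea, not the paper's analysis): drive the hand with a 110 ms old target position, extrapolated over that delay using a velocity estimate that is itself about 200 ms old. For a predictably moving target this yields a zero-lag estimate.

```python
def predicted_position(pos, vel, t, dt, pos_delay=0.110, vel_delay=0.200):
    """Extrapolate a delayed position sample with an older velocity sample.

    `pos` and `vel` are target positions and velocities sampled every
    `dt` seconds. Position information is assumed to act with a ~110 ms
    delay and velocity information with a ~200 ms delay, as reported
    above; extrapolating the delayed position over the position delay
    cancels the lag for a target that keeps moving predictably.
    """
    i_pos = int(round((t - pos_delay) / dt))   # 110 ms old position sample
    i_vel = int(round((t - vel_delay) / dt))   # 200 ms old velocity sample
    return pos[i_pos] + vel[i_vel] * pos_delay

dt = 0.01
pos = [0.5 * i * dt for i in range(100)]   # target moving at 0.5 m/s
vel = [0.5] * 100
print(predicted_position(pos, vel, t=0.9, dt=dt))   # 0.45, the true position
```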

    Continuously updating one’s predictions underlies successful interception

    This paper reviews our understanding of the interception of moving objects. Interception is a demanding task that requires both spatial and temporal precision, and this precision must be achieved on the basis of imprecise and sometimes biased sensory information. We argue that people make precise interceptive movements by continuously adjusting the movement as it unfolds. Initial estimates of how the movement should progress can be quite inaccurate. As the movement evolves, the estimate of how the rest of the movement should progress gradually becomes more reliable, as prediction is replaced by sensory information about the progress of the movement. The improvement is particularly important when things do not progress as anticipated. Constantly adjusting one's estimate of how the movement should progress combines the opportunity to move in a way that one anticipates will best meet the task demands with correcting for any errors in such anticipation. The fact that the ongoing movement might have to be adjusted can be considered when determining how to move, and any systematic anticipation errors can be corrected on the basis of the outcome of earlier actions.
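
    The core idea can be illustrated with a toy control loop (a sketch under simplifying assumptions, not the authors' model): the hand keeps re-aiming at where delayed, extrapolated target information says the target will be at contact time, so an unexpected change in target speed is largely absorbed along the way.

```python
def simulate_interception(t_contact=0.6, dt=0.01, delay=0.11):
    """Toy interception with continuous re-planning (illustrative only)."""
    steps = int(round(t_contact / dt))
    lag = int(round(delay / dt))                 # ~110 ms visuomotor delay
    target = hand = 0.0
    vel = 0.5                                    # target speed (m/s)
    history = [0.0] * (lag + 2)                  # past target positions
    for i in range(steps):
        t = i * dt
        if t > t_contact / 2:
            vel = 0.8                            # unexpected speed-up
        target += vel * dt
        history.append(target)
        seen_pos = history[-lag - 1]             # 110 ms old position
        seen_vel = (history[-lag - 1] - history[-lag - 2]) / dt
        time_left = t_contact - t
        # re-plan: head for the predicted contact point, arriving on time
        aim = seen_pos + seen_vel * (time_left + delay)
        hand += (aim - hand) * dt / max(time_left, dt)
    return target - hand                         # miss distance at contact

print("miss: %.4f m" % simulate_interception())
```

    If the plan were instead frozen after the first step, the miss would be roughly the target's change in speed times the remaining movement time; continuous updating shrinks it to a few millimetres in this toy example.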

    The control of the reach-to-grasp movement

    Different frames of reference for position and motion

    Prediction of a moving target's position in fast goal-directed action

    Subjects made fast goal-directed arm movements towards moving targets. In some cases, the perceived direction of target motion was manipulated by moving the background. By comparing the trajectories towards moving targets with those towards static targets, we determined the position towards which subjects were aiming at movement onset. We showed that this position was an extrapolation from the target's position at that moment in its perceived direction of motion. If subjects were to continue to extrapolate in the perceived direction of target motion from the position at which they perceive the target at each instant, the error would decrease during the movement. By analysing the differences between subjects' arm movements towards targets moving in different (apparent) directions with a linear second-order model, we show that the reduction in the error that this predicts is not enough to explain how subjects compensate for their initial misjudgements.
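
    A sketch of the kind of linear second-order model referred to here (parameter values are illustrative, not fitted to the data): the hand behaves like a critically damped system driven towards an aim point, so an aim based on a misperceived direction of motion is only partly corrected when the aim point is updated during the movement.

```python
import numpy as np

def second_order_response(aim, duration=0.5, dt=0.001, omega=12.0, zeta=1.0):
    """Lateral hand position of a linear second-order system (sketch).

    The hand is driven towards the (possibly time-varying) aim point
    `aim(t)`; `omega` and `zeta` are an illustrative natural frequency
    and damping ratio.
    """
    n = int(round(duration / dt))
    x = v = 0.0
    xs = np.zeros(n)
    for i in range(n):
        a = omega**2 * (aim(i * dt) - x) - 2 * zeta * omega * v
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

# The aim point starts 2 cm off (extrapolation in a misperceived
# direction) and is corrected 200 ms into the movement; the second-order
# response removes only part of the initial error by movement end.
path = second_order_response(lambda t: 0.02 if t < 0.2 else 0.0)
print("residual lateral error: %.4f m" % path[-1])
```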

    Perceived distance, shape and size

    If distance, shape and size are judged independently from the retinal and extra-retinal information at hand, different kinds of information can be expected to dominate each judgement, so that errors in one judgement need not be consistent with errors in other judgements. In order to evaluate how independent these three judgements are, we examined how adding information that improves one judgement influences the others. Subjects adjusted the size and the global shape of a computer-simulated ellipsoid to match a tennis ball. They then indicated manually where they judged the simulated ball to be. Adding information about distance improved the three judgements in a consistent manner, demonstrating that a considerable part of the errors in all three judgements was due to misestimating the distance. Adding information about shape that is independent of distance improved subjects' judgements of shape, but did not influence the size they set or the manually indicated distance. Thus, subjects ignored conflicts between the cues when judging shape, rather than using the conflicts to improve their estimate of the ellipsoid's distance. We conclude that the judgements are quite independent, in the sense that no attempt is made to attain consistency, but that they do rely on some common measures, such as that of distance.
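
    The shared role of distance can be made explicit with a little geometry (a sketch with illustrative numbers, not the paper's analysis): judged size scales linearly with judged distance, whereas depth from binocular disparity scales with distance squared, so a single misjudged distance biases size and shape in a consistent way.

```python
def judgements_from_distance(theta, disparity, judged_d, iod=0.065):
    """Propagate one distance estimate into size and shape judgements.

    `theta` is the ball's retinal angle (rad), `disparity` the relative
    disparity across its depth (rad), `iod` the interocular distance (m).
    Width scales with judged distance; disparity-defined depth scales
    with judged distance squared.
    """
    width = theta * judged_d                  # small-angle size scaling
    depth = disparity * judged_d**2 / iod     # disparity-based depth
    return width, depth, depth / width        # shape = aspect ratio

# A 6.7 cm tennis ball at 0.60 m that is judged to be at 0.50 m:
theta = 0.067 / 0.60
disparity = 0.065 * 0.067 / 0.60**2
print(judgements_from_distance(theta, disparity, judged_d=0.50))
```

    With the correct distance the same signals specify a 6.7 cm sphere with an aspect ratio of 1; underestimating the distance makes the ball appear smaller and flattened in depth, one misestimate propagating to all three judgements.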

    The special role of distant structures in perceived object velocity

    How do we judge an object's velocity when we ourselves are moving? Subjects compared the velocity of a moving object before and during simulated ego-motion. The simulation consisted of moving the visible environment relative to the subject's eye in precisely the way that a static environment would move relative to the eye if the subject had moved. The ensuing motion of the background on the screen influenced the perceived target velocity. We found that the motion of the “most distant structure” largely determined the influence of the moving background. Relying on retinal motion relative to that of distant structures is usually a reliable method for accounting for rotations of the eye. It provides an estimate of the object's movement relative to the observer. This strategy for judging object motion has the advantage that it requires neither metric information about depth nor detailed knowledge of one's own motion.
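
    The strategy can be written down in a few lines (a sketch of the idea, not the paper's analysis): subtract the retinal motion of the most distant visible structure from the object's retinal motion. An eye rotation shifts all retinal images by nearly the same amount, and the observer's translation hardly displaces very distant structures, so only the ordinal depth of the background (which structure is farthest) is needed.

```python
import numpy as np

def perceived_velocity(retinal_vels, depth_order):
    """Object velocity judged relative to the most distant structure.

    `retinal_vels[0]` is the object's retinal velocity; the rest belong
    to background structures. `depth_order` gives their relative depth
    (larger = farther); only the ordering is used, not metric depth.
    """
    farthest = 1 + int(np.argmax(depth_order[1:]))   # most distant element
    return retinal_vels[0] - retinal_vels[farthest]

# Retinal velocities (deg/s) during a 3 deg/s eye rotation: everything
# is shifted by 3 deg/s, but the nearby wall also reflects the
# observer's own translation while the far hill barely moves.
vels  = np.array([8.0, 6.5, 3.0])        # object, near wall, far hill
order = np.array([1, 2, 3])
print(perceived_velocity(vels, order))   # 5.0 deg/s relative to the observer
```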

    Humans combine the optic flow with static depth cues for robust perception of heading

    The retinal flow during normal locomotion contains components due to rotation and translation of the observer. The translatory part of the flow pattern is informative about heading, because it radiates outward from the direction of heading. However, it is not directly accessible from the retinal flow. Nevertheless, humans can perceive their direction of heading from the compound retinal flow without needing extra-retinal signals that indicate the rotation. Two classes of models have been proposed to explain the visual decomposition of the retinal flow into its constituent parts. One type relies on local operations to remove the rotational part of the flow field. The other type explicitly determines the direction and magnitude of the rotation from the global retinal flow, for subsequent removal. According to the former model, nearby points are most reliable for estimating one's heading. In the latter type of model the quality of the heading estimate depends on the accuracy with which the ego-rotation is determined, and is therefore most reliable when based on the most distant points. We report that subjects underestimate the eccentricity of heading, relative to the fixated point in the ground plane, when the visible range of the ground plane is reduced. Moreover, we find that in heading perception humans can tolerate more noise than the optimal observer (in the least-squares sense) could if it used only the optic flow. The latter finding argues against both schemes, because ultimately both classes of model are limited in their noise tolerance to that of the optimal observer, which uses all the information available in the optic flow. Apparently humans use more information than is present in the optic flow. Both aspects of human performance are consistent with the use of static depth information, in addition to the optic flow, to select the most distant points. Processing the flow of these selected points provides the most reliable estimate of the ego-rotation. Subsequent estimates of the heading direction, obtained from the translatory component of the flow, are robust with respect to noise. In such a scheme heading estimates are subject to systematic errors, similar to those reported, if the most distant points are not much further away than the fixation point, because the ego-rotation is then underestimated.
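
    The favoured scheme can be sketched as follows (under a small-field approximation in which an eye rotation shifts all retinal images by roughly the same amount; the point selection and least-squares step are illustrative, not the paper's implementation): static depth cues select the most distant points, their mean flow is taken as the rotational component, and heading is recovered as the focus of expansion of the residual flow.

```python
import numpy as np

def heading_from_flow(points, flow, depths, n_far=20):
    """Estimate heading after removing rotation measured at far points.

    `points` are image positions (N x 2), `flow` the retinal velocities
    (N x 2), `depths` the statically cued distances. The flow of the
    most distant points is dominated by the ego-rotation, because the
    translational component shrinks with distance.
    """
    far = np.argsort(depths)[-n_far:]        # most distant points
    rotation = flow[far].mean(axis=0)        # approximate rotational flow
    trans = flow - rotation                  # residual translational flow
    # The focus of expansion f satisfies trans parallel to (p - f):
    # trans_y * f_x - trans_x * f_y = trans_y * p_x - trans_x * p_y.
    A = np.column_stack([trans[:, 1], -trans[:, 0]])
    b = trans[:, 1] * points[:, 0] - trans[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe                               # estimated heading direction

# Synthetic check: translation towards (0.1, 0) plus a uniform
# rotational shift; distant points carry almost pure rotation.
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (200, 2))
depths = rng.uniform(1.0, 50.0, 200)
flow = (pts - np.array([0.1, 0.0])) / depths[:, None] + np.array([0.02, 0.01])
print(heading_from_flow(pts, flow, depths))  # close to [0.1, 0.0]
```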